Introduction

The objective of this study is a statistical evaluation of the risk of failing dissolution stage testing (Immediate-Release Dosage Forms) at 0, 24 and 36 months for Active Pharmaceutical Ingredient (API) A in batches of AB Fixed Dose Combination (FDC) Film Coated Tablets (FCT) packaged in blisters, at the 25-minute dissolution time point and under the storage conditions 25°C/60%RH and 30°C/75%RH. The specification limit, or Q value, is 80%.

Data

The dataset consists of Stage 1 (one dissolution bath, i.e., 6 vessels) dissolution (% dissolved) data for the active A in 9 batches (1, 2, 3, 4, 5, 6, 7, 8, 9) of AB FDC FCT packaged in blisters.

Data from 6 vessels were available per individual batch and stability time point. Table 1 shows the stability data structure.

## [1] "Table  1: Stability data structure"
| Condition  | Stability time points (months) |
|------------|--------------------------------|
| 5°C        | 3, 6, 12                       |
| 25°C/60%RH | 0, 3, 6, 9, 12, 18             |
| 30°C/75%RH | 3, 6, 9, 12, 18                |
| 40°C/75%RH | 3, 6                           |
| 50°C       | 3                              |

A listing of the data is shown in the Appendix.

Methods

A Bayesian linear mixed effects model was fitted, accounting for the process mean at the initial time point, condition-specific rates of change, and random components for batch-to-batch and run-to-run variability (the latter possibly including other uncontrolled sources of variability that impact the analytical run). The model is as follows:

\[ y_{l(ijk)} = \mu + \alpha_{i}+\gamma_{ijk}+\beta_{j}\times t_{ijk}+\epsilon_{l(ijk)}\] where

  • \(y_{l(ijk)}\) = dissolution result for vessel \(l\) in batch \(i\) under condition \(j\) at run \(k\),
  • \(\mu\) = process mean at the initial time point,
  • \(\alpha_{i}\) = random batch effect,
  • \(\gamma_{ijk}\) = random run effect,
  • \(\beta_{j}\) = condition-specific rate of change and \(t_{ijk}\) = stability time, and
  • \(\epsilon_{l(ijk)}\) = residual error.

Bayes theorem

\[ P(\theta|Data) = \frac{P(Data|\theta)\times P(\theta)}{P(Data)}\] where

  • \(P(\theta|Data)\) = posterior probability of parameters (\(\theta\)) given data [updated belief]
  • \(P(Data|\theta)\) = the likelihood [the experiment or study],
  • \(P(\theta)\) = prior(s): past state of knowledge or beliefs [previous experiments/studies, expert opinion or your own belief] and
  • \(P(Data)\) = marginal probability of the data

Note that in the Bayesian linear mixed effects model above, I have stated only the likelihood (\(P(Data|\theta)\)) and nothing is mentioned about the prior(s) (\(P(\theta)\)). Note also that the complete likelihood includes the family of distributions (default: Gaussian/Normal) and the link function (default: identity link).
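As a concrete illustration of the theorem (separate from the analysis itself), a minimal grid approximation for a Normal mean shows how the posterior is the likelihood times the prior, normalized. All numbers below are toy values chosen for illustration, not study data.

```r
# Grid approximation of Bayes theorem for a Normal mean (illustration only).
# Assumed toy setup: prior theta ~ N(90, 5^2); 6 hypothetical dissolution
# results; known sampling SD = 2.
theta <- seq(80, 100, by = 0.01)                 # grid of candidate means
prior <- dnorm(theta, mean = 90, sd = 5)         # P(theta): prior belief
y     <- c(93.4, 91.0, 92.1, 92.8, 95.1, 93.1)  # hypothetical data
lik   <- sapply(theta, function(m) prod(dnorm(y, mean = m, sd = 2)))  # P(Data|theta)
post  <- lik * prior / sum(lik * prior)          # P(theta|Data), normalized on the grid
# The posterior mean lies between the prior mean (90) and the sample mean
# (about 92.9), pulled strongly toward the data because the likelihood is
# informative relative to the prior.
post_mean <- sum(theta * post)
round(post_mean, 2)
```

The denominator \(P(Data)\) corresponds to the `sum(lik * prior)` normalization step, which is why it can usually be ignored up to proportionality.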

Dissolution acceptance criteria

The acceptance criteria of dissolution for Immediate-Release Dosage Forms are listed below:

  • Stage 1: Test 6 tablets (one dissolution bath). Accept the batch if each unit is not less than \(Q+5\%\), otherwise go to stage 2.

  • Stage 2: Test 6 additional tablets (second dissolution bath). Accept the batch if the average of 12 units (Stage 1 \(+\) Stage 2) is equal to or greater than \(Q\), and no unit is less than \(Q-15\%\). If the batch fails Stage 2 then go to Stage 3.

  • Stage 3: Test 12 additional tablets (third and fourth dissolution baths). Accept a batch if the average of 24 units (Stage 1 \(+\) Stage 2 \(+\) Stage 3) is equal to or greater than \(Q\), not more than 2 units are less than \(Q-15\%\) and no unit is less than \(Q-25\%\). If the batch fails this stage then it is rejected.

Note that \(Q\) is the amount of dissolved active ingredient specified in the individual monograph, expressed as a percentage of the labeled content.
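The staged decision rules above translate directly into code. The helper below is a hypothetical illustration (not part of the study's analysis code) that applies the three stages to vectors of Stage 1, Stage 2 and Stage 3 results:

```r
# Sketch of the staged dissolution acceptance rules described above.
# s1, s2, s3 are numeric vectors of % dissolved (6, 6 and 12 units);
# illustrative helper only, not the study's analysis code.
dissolution_decision <- function(s1, s2 = NULL, s3 = NULL, Q = 80) {
  # Stage 1: accept if every unit is at least Q + 5
  if (all(s1 >= Q + 5)) return("Accept at Stage 1")
  if (is.null(s2)) return("Go to Stage 2")
  # Stage 2: mean of 12 units >= Q and no unit below Q - 15
  u12 <- c(s1, s2)
  if (mean(u12) >= Q && all(u12 >= Q - 15)) return("Accept at Stage 2")
  if (is.null(s3)) return("Go to Stage 3")
  # Stage 3: mean of 24 units >= Q, at most 2 units below Q - 15,
  # and no unit below Q - 25
  u24 <- c(u12, s3)
  if (mean(u24) >= Q && sum(u24 < Q - 15) <= 2 && all(u24 >= Q - 25))
    return("Accept at Stage 3")
  "Reject"
}

dissolution_decision(c(93, 91, 92, 95, 90, 94))   # every unit >= 85: Stage 1 pass
dissolution_decision(c(84, 91, 92, 95, 90, 94),   # one unit < 85, so Stage 2 ...
                     c(88, 90, 91, 93, 89, 92))   # ... where the mean of 12 >= 80
```

This kind of helper also makes the risk quantity of interest explicit: the probability of failing at a given stage is the probability that the corresponding branch is reached and its criteria are not met.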

Results

Exploratory data analysis

Summary statistics

Table 2 shows the descriptive statistics of
dissolution data by storage condition. Note that the descriptive statistics are collapsed over stability protocol, Zytiga granulate manufacturing site, Zytiga synthesis route and stability time.

## [1] "Table  2: Summary statistics of dissolution data by storage condition"
| Condition  | Observations | Mean (SD)   | Minimum | Maximum |
|------------|--------------|-------------|---------|---------|
| 5°C        | 162          | 93.6 (2.27) | 88.0    | 99.5    |
| 25°C/60%RH | 324          | 93.2 (2.23) | 88.0    | 98.8    |
| 30°C/75%RH | 270          | 93.4 (2.2)  | 86.4    | 99.8    |
| 40°C/75%RH | 108          | 93.8 (2.18) | 89.0    | 98.9    |
| 50°C       | 54           | 92.9 (2.22) | 88.9    | 98.0    |

Exploratory plots

Figure 1 shows the stability profile of the dissolution data by batch and storage condition. The blue dashed line represents the \(Q+5\) value (85% dissolved) and the red dashed line represents the \(Q\) value (80% dissolved).

## [1] "Figure  1: Stability profile of dissolution values (% dissolved) by batch and storage condition"

Frequentist approach

The frequentist approach restricts our view to the current experiment or studies, i.e., \(P(Data|\theta)\). It focuses on only one part of Bayes theorem and claims “ignorance” of the past state of knowledge (i.e., the prior(s) \(P(\theta)\)) about the current process or related processes.

## Final model
fmod <- lmer(Result ~ Stabtime:Condition + (1 | RunID) + (1 | Batch), data = dataset)

Glance

Table 4 shows an overview of the model fit statistics from the frequentist approach.

tbls("Tab4", display = "full") 
## [1] "Table  4: Frequentist: model fit statistics"
glance(fmod) %>%
  knitr::kable() |> 
  kable_styling(bootstrap_options = c("striped", "condensed"))
| nobs | sigma    | logLik    | AIC      | BIC      | REMLcrit | df.residual |
|------|----------|-----------|----------|----------|----------|-------------|
| 918  | 1.685953 | -1884.039 | 3786.079 | 3829.478 | 3768.079 | 909         |

Fixed effects

Table 5 shows the fixed effects model summaries from the frequentist approach.

tbls("Tab5", display = "full") 
## [1] "Table  5: Frequentist: fixed effects"
tidy(fmod, effects="fixed", conf.int = TRUE, conf.level = 0.95)[, -1] |> 
mutate(
  Mean = round_any(estimate, 0.01),
  StErr = round_any(std.error, 0.001),
  P.value = ifelse(
    round_any(p.value, 0.01) < 0.01,
    round_any(0.01, 0.01),
    round_any(p.value, 0.01)
  ),
  LCL = round_any(conf.low, 0.01), UCL = round_any(conf.high, 0.01),
  "Mean (SD)" = glue("{format(Mean,nsmall=2)}({format(StErr,nsmall=3)})"),
  Pvalue = format(P.value,nsmall = 2),
  Term = term,
  "95% CI" = glue("({format(LCL,nsmall=2)};{format(UCL,nsmall=2)})")) |> 
  dplyr::select(Term, `Mean (SD)`, `95% CI`, Pvalue) |>  
  knitr::kable() |> 
  kable_styling(bootstrap_options = c("striped", "condensed"))
| Term                         | Mean (SD)     | 95% CI         | Pvalue |
|------------------------------|---------------|----------------|--------|
| (Intercept)                  | 93.08 (0.436) | (92.12; 94.04) | 0.01   |
| Stabtime:Condition5°C        | 0.06 (0.035)  | (-0.01; 0.13)  | 0.08   |
| Stabtime:Condition25°C/60%RH | 0.02 (0.022)  | (-0.03; 0.06)  | 0.48   |
| Stabtime:Condition30°C/75%RH | 0.04 (0.022)  | (-0.01; 0.08)  | 0.10   |
| Stabtime:Condition40°C/75%RH | 0.15 (0.069)  | (0.01; 0.28)   | 0.04   |
| Stabtime:Condition50°C       | -0.07 (0.144) | (-0.36; 0.21)  | 0.62   |

Variance components

Table 6 shows the variance components summaries from the frequentist approach.

tbls("Tab6", display = "full") 
## [1] "Table  6: Frequentist: variance components"
tidy(fmod,effects = "ran_pars")[, -c(1,3)] %>% 
  mutate(
    VarianceComponent = group,
    Mean_ = round_any(estimate, 0.01),
    Mean = format(Mean_, nsmall=2)
  ) %>% 
  dplyr::select(VarianceComponent, Mean) %>% 
  knitr::kable() %>% 
  kable_styling(bootstrap_options = c("striped", "condensed"))
| VarianceComponent | Mean |
|-------------------|------|
| RunID             | 0.94 |
| Batch             | 1.17 |
| Residual          | 1.69 |
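Note that the Mean column in Table 6 reports standard deviations (the default `ran_pars` output of tidy). Squaring them gives the variance components and each source's share of the total variability; a quick base-R check using the values above:

```r
# Standard deviations of the variance components from Table 6 (frequentist fit)
sds  <- c(RunID = 0.94, Batch = 1.17, Residual = 1.69)
vars <- sds^2                               # variances per source
share <- round(100 * vars / sum(vars), 1)   # % of total variability per source
share
```

The squared residual term (1.69^2 = 2.86) also matches `sigma` from the glance output in Table 4, confirming that the tabulated values are on the standard-deviation scale.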

Augmented data

Table 7 shows the augmented data from the frequentist approach.

tbls("Tab7", display = "full") 
## [1] "Table  7: Frequentist: augmented data"
head(
  augment(fmod)[, c(1:9,16)]) %>%
  knitr::kable() %>% 
  kable_styling(bootstrap_options = c("striped", "condensed"))
| Result   | Stabtime | Condition | RunID | Batch | .fitted | .resid     | .hat      | .cooksd   | .wtres     |
|----------|----------|-----------|-------|-------|---------|------------|-----------|-----------|------------|
| 97.43350 | 3        | 5°C       | 1     | 1     | 93.3023 | 4.1311932  | 0.1123238 | 0.1426499 | 4.1311932  |
| 90.97686 | 3        | 5°C       | 1     | 1     | 93.3023 | -2.3254480 | 0.1123238 | 0.0451995 | -2.3254480 |
| 92.10106 | 3        | 5°C       | 1     | 1     | 93.3023 | -1.2012475 | 0.1123238 | 0.0120611 | -1.2012475 |
| 92.82188 | 3        | 5°C       | 1     | 1     | 93.3023 | -0.4804286 | 0.1123238 | 0.0019292 | -0.4804286 |
| 95.06465 | 3        | 5°C       | 1     | 1     | 93.3023 | 1.7623440  | 0.1123238 | 0.0259598 | 1.7623440  |
| 93.09587 | 3        | 5°C       | 1     | 1     | 93.3023 | -0.2064378 | 0.1123238 | 0.0003562 | -0.2064378 |

Bayesian approach: JAGS

Bayes theorem

The Bayesian approach utilizes all the components of Bayes theorem, i.e., it combines the current experiment with the prior beliefs (the totality of information).

\[ P(\theta|Data) = \frac{P(Data|\theta)\times P(\theta)}{P(Data)}\]

Priors

The priors are listed below:

# Priors:
## fixed effects
for(f in 1:6){beta[f] ~ dnorm(0.0, 1.0E-3)}
## Precision priors
taue ~ dgamma(1.0E-3, 1.0E-3)
taur ~ dgamma(1.0E-3, 1.0E-3)
taub ~ dgamma(1.0E-3, 1.0E-3)
## variance components
sigmae2 <- 1.0/(taue)
sigmar2 <- 1.0/(taur)
sigmab2 <- 1.0/(taub)

Likelihood

# Likelihood
for(k in 1:N){
  y[k] ~ dnorm(mu[k], taue)
  mu[k] <- beta[1]+(beta[2]*c1[k]+beta[3]*c2[k]+beta[4]*c3[k]+beta[5]*c4[k]+
           beta[6]*c5[k])*t[k]+b[bid[k]]+r[rid[k]]
}
for (i in 1:B) {b[i] ~ dnorm(0, taub)}
for (j in 1:R) {r[j] ~ dnorm(0, taur)}

JAGS code

By putting together the likelihood and the priors we get the posterior.

model {
  # Likelihood
  for(k in 1:N){
    y[k] ~ dnorm(mu[k], taue)
    mu[k] <- beta[1]+(beta[2]*c1[k]+beta[3]*c2[k]+beta[4]*c3[k]+beta[5]*c4[k]+
             beta[6]*c5[k])*t[k]+b[bid[k]]+r[rid[k]]
  }
  for (i in 1:B) {b[i] ~ dnorm(0, taub)}
  for (j in 1:R) {r[j] ~ dnorm(0, taur)}

  # Priors:
  ## fixed effects
  for(f in 1:6){beta[f] ~ dnorm(0.0, 1.0E-3)}
  ## Precision priors
  taue ~ dgamma(1.0E-3, 1.0E-3)
  taur ~ dgamma(1.0E-3, 1.0E-3)
  taub ~ dgamma(1.0E-3, 1.0E-3)
  ## variance components
  sigmae2 <- 1.0/(taue)
  sigmar2 <- 1.0/(taur)
  sigmab2 <- 1.0/(taub)
}

Run model

#----------------------------- Global parameters ------------------------------#
nsims <- 2000
nburn <- 1000
nthin <- 1
#------------------------ Tolerance interval parameters -----------------------#
Betat <- 0.95 # content
Gammat <- 0.95 # confidence

#------------------------- Specifications (Q value)  --------------------------#
Q <- 80
#------------------------------ Analysis subset -------------------------------#
N <- dim(dataset)[1]
B <- max(as.numeric(dataset$Batch))
R <- max(as.numeric(dataset$RunID))
bid <- dataset$Batch
rid <- dataset$RunID
y <- dataset$Result
t <- dataset$Stabtime
c1 <- ifelse(dataset$Condition == unique(dataset$Condition)[1], 1, 0)
c2 <- ifelse(dataset$Condition == unique(dataset$Condition)[2], 1, 0)
c3 <- ifelse(dataset$Condition == unique(dataset$Condition)[3], 1, 0)
c4 <- ifelse(dataset$Condition == unique(dataset$Condition)[4], 1, 0)
c5 <- ifelse(dataset$Condition == unique(dataset$Condition)[5], 1, 0)

## for simulation:
B2 <- 20
N2 <- R2 <- 400
rid2 <- 1:R2
bid2 <- rep(1:20, each = 20)
#-------------------------------- jags subset ---------------------------------#
dataseta <- c(
  "N", "N2", "B", "B2", "R", "R2", "y", "t",
  "c1", "c2", "c3", "c4", "c5", "bid", "bid2",
  "rid", "rid2", "Q"
)

#--------------------------------- Parameters ---------------------------------#
#All parameters of interest (modeled and derived)
param<-c(
  "beta","sigmae2","sigmar2","sigmab2",
  "poos0","poos25C24","poos25C36","poos30C24","poos30C36"
)

#Only modeled parameters
param0 <- c("beta", "sigmae2", "sigmar2", "sigmab2")

#------------------------------ Initial values --------------------------------#
inits <- list(
  list(
    beta = c(89, 0.04, 0.02, 0.04, 0.14, -0.07),
    taue = 1,
    taub = 1.25,
    taur = 1, 
    b = rnorm(B, 0, 1),
    r = rnorm(R, 0, 1)
  ),
  list(
    beta = c(93, 0.08, 0.01, 0.03, 0.12, -0.08),
    taue = 1,
    taub = 1.48,
    taur = 1, 
    b = rnorm(B, 0, 1),
    r = rnorm(R, 0, 1)
  ),
  list(
    beta = c(95, 0.06, 0.03, 0.02, 0.16, -0.09),
    taue = 1,
    taub = 1.30,
    taur = 1, 
    b = rnorm(B, 0, 1),
    r = rnorm(R, 0, 1)
  )
)

#---------------------------------- Run jags ----------------------------------#
# Start the clock!
ptm <- proc.time()
bmodj<- jags(
  data = dataseta,
  inits = inits,
  n.chains = 3,
  param,
  n.burnin = nburn, 
  n.iter = nsims,
  n.thin = nthin,
  model.file = "model.jags", 
  DIC = TRUE
)
## Compiling model graph
##    Resolving undeclared variables
##    Allocating nodes
## Graph information:
##    Observed stochastic nodes: 918
##    Unobserved stochastic nodes: 50191
##    Total graph size: 107540
## 
## Initializing model
bmodj0 <- jags(
  data = dataseta,
  inits = inits,
  n.chains = 3,
  param0,
  n.burnin = nburn, 
  n.iter = nsims,
  n.thin = nthin,
  model.file = "model.jags",
  DIC = TRUE
)
## Compiling model graph
##    Resolving undeclared variables
##    Allocating nodes
## Graph information:
##    Observed stochastic nodes: 918
##    Unobserved stochastic nodes: 50191
##    Total graph size: 107540
## 
## Initializing model
# Stop the clock
proc.time() - ptm
##    user  system elapsed 
##  16.516   0.282  17.111

Convergence diagnostics

#--------------------------- Convergence diagnostics --------------------------#
plot(bmodj0)

S <- as.mcmc(bmodj0)
plot(S)

crosscorr.plot(S)

#gelman.diag(S)
#gelman.plot(S)

Model summaries

#------------------------------ Posterior summary -----------------------------#
print(bmodj)
## Inference for Bugs model at "model.jags", fit using jags,
##  3 chains, each with 2000 iterations (first 1000 discarded)
##  n.sims = 3000 iterations saved
##               mu.vect sd.vect     2.5%      25%      50%      75%    97.5%
## beta[1]        93.057   0.465   92.140   92.774   93.060   93.344   93.976
## beta[2]         0.063   0.036   -0.006    0.038    0.063    0.087    0.132
## beta[3]         0.016   0.022   -0.027    0.001    0.016    0.031    0.059
## beta[4]         0.037   0.022   -0.009    0.023    0.037    0.053    0.080
## beta[5]         0.147   0.069    0.011    0.101    0.147    0.193    0.284
## beta[6]        -0.068   0.145   -0.350   -0.165   -0.067    0.030    0.209
## poos0[1]        0.003   0.010    0.000    0.000    0.000    0.002    0.023
## poos0[2]        0.000   0.000    0.000    0.000    0.000    0.000    0.000
## poos0[3]        0.000   0.000    0.000    0.000    0.000    0.000    0.000
## poos25C24[1]    0.002   0.008    0.000    0.000    0.000    0.002    0.018
## poos25C24[2]    0.000   0.000    0.000    0.000    0.000    0.000    0.000
## poos25C24[3]    0.000   0.000    0.000    0.000    0.000    0.000    0.000
## poos25C36[1]    0.002   0.008    0.000    0.000    0.000    0.002    0.020
## poos25C36[2]    0.000   0.000    0.000    0.000    0.000    0.000    0.000
## poos25C36[3]    0.000   0.000    0.000    0.000    0.000    0.000    0.000
## poos30C24[1]    0.001   0.006    0.000    0.000    0.000    0.000    0.013
## poos30C24[2]    0.000   0.000    0.000    0.000    0.000    0.000    0.000
## poos30C24[3]    0.000   0.000    0.000    0.000    0.000    0.000    0.000
## poos30C36[1]    0.001   0.005    0.000    0.000    0.000    0.000    0.007
## poos30C36[2]    0.000   0.000    0.000    0.000    0.000    0.000    0.000
## poos30C36[3]    0.000   0.000    0.000    0.000    0.000    0.000    0.000
## sigmab2         1.796   1.254    0.565    1.020    1.455    2.162    5.117
## sigmae2         2.855   0.150    2.572    2.755    2.849    2.954    3.159
## sigmar2         0.900   0.175    0.600    0.778    0.885    0.999    1.282
## deviance     3565.547  19.809 3528.505 3552.309 3565.095 3577.854 3606.218
##               Rhat n.eff
## beta[1]      1.001  3000
## beta[2]      1.001  3000
## beta[3]      1.001  3000
## beta[4]      1.001  3000
## beta[5]      1.001  2900
## beta[6]      1.001  2200
## poos0[1]     1.022  1600
## poos0[2]     1.000     1
## poos0[3]     1.000     1
## poos25C24[1] 1.032  1800
## poos25C24[2] 1.000     1
## poos25C24[3] 1.000     1
## poos25C36[1] 1.032  1700
## poos25C36[2] 1.291  3000
## poos25C36[3] 1.000     1
## poos30C24[1] 1.028  1500
## poos30C24[2] 1.000     1
## poos30C24[3] 1.000     1
## poos30C36[1] 1.048   860
## poos30C36[2] 1.000     1
## poos30C36[3] 1.000     1
## sigmab2      1.001  3000
## sigmae2      1.001  3000
## sigmar2      1.002  1100
## deviance     1.003   810
## 
## For each parameter, n.eff is a crude measure of effective sample size,
## and Rhat is the potential scale reduction factor (at convergence, Rhat=1).
## 
## DIC info (using the rule, pD = var(deviance)/2)
## pD = 195.8 and DIC = 3761.4
## DIC is an estimate of expected predictive error (lower deviance is better).
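The poos* quantities above (probability of being out of specification at a given time, condition and stage) are computed inside model.jags via posterior predictive simulation; that part of the file is not shown. As a hedged, standalone approximation of the Stage 1 case, the sketch below plugs the posterior means from the summary above into a Monte Carlo simulation of a future Stage 1 test at 36 months under 25°C/60%RH. The variable names and the plug-in shortcut are my own; the full Bayesian computation averages over the entire posterior rather than fixing the parameters at their means.

```r
# Plug-in Monte Carlo sketch of the Stage 1 out-of-specification risk
# (cf. poos25C36[1]); posterior means taken from the JAGS summary above.
set.seed(123)
beta1   <- 93.057                     # posterior mean of beta[1] (intercept)
slope25 <- 0.016                      # posterior mean of beta[3] (25°C/60%RH slope)
sigmab2 <- 1.796; sigmar2 <- 0.900; sigmae2 <- 2.855   # posterior mean variances
Q <- 80; t <- 36; nsim <- 1e5
mu <- beta1 + slope25 * t             # process mean at 36 months
b  <- rnorm(nsim, 0, sqrt(sigmab2))   # new-batch effects
r  <- rnorm(nsim, 0, sqrt(sigmar2))   # new-run effects
# 6 vessels per simulated batch/run; row i shares the mean mu + b[i] + r[i]
units <- matrix(rnorm(nsim * 6, mean = mu + b + r, sd = sqrt(sigmae2)), ncol = 6)
mins  <- apply(units, 1, min)
risk  <- mean(mins < Q + 5)           # P(at least one unit < Q+5) = P(fail Stage 1)
round(risk, 4)
```

A plug-in estimate like this is typically somewhat smaller than the posterior mean of poos25C36[1] reported above, because it ignores the parameter uncertainty that the JAGS computation propagates.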

Bayesian approach: brms

brms R package: Bayesian Regression Models using Stan

The Bayesian approach has a steep learning curve: when using software such as BUGS, JAGS, Nimble and Stan, not only does one have to specify the likelihood, one also has to specify the priors.

The R package brms tries to flatten this curve by allowing an easy transition from the frequentist approach: it mimics the model-specification syntax of the widely used frequentist R package lme4. Let us fit our first Bayesian model.

Table 8 shows how to switch instantly from the frequentist approach to the Bayesian approach: you need to change just one function, lmer -> brm.

tbls("Tab8", display = "full")
## [1] "Table  8: Instantly switching from frequentist[lme4/lmer] to Bayesian[brm/brm]"
# ## lme4: frequentist approach
# fmod <- lmer(Result~Stabtime:Condition+(1|RunID)+(1|Batch),data=dataset)
# ## brms: Bayesian approach
bmod0 <-  brm(
  Result ~ Stabtime:Condition + (1 | RunID) + (1 | Batch),
  data = dataset
)
# # save brms object:
# saveRDS(bmod0, "output/bmod0.rds")
# read brms object:
# bmod0 <- readRDS("output/bmod0.rds")
# summary(bmod0)
tidy(bmod0) %>%
  knitr::kable() %>% 
  kable_styling(bootstrap_options = c("striped", "condensed"))
| effect   | component | group    | term                         | estimate   | std.error | conf.low   | conf.high  |
|----------|-----------|----------|------------------------------|------------|-----------|------------|------------|
| fixed    | cond      | NA       | (Intercept)                  | 93.0751136 | 0.4886223 | 92.1082973 | 94.0699502 |
| fixed    | cond      | NA       | Stabtime:Condition5°C        | 0.0613908  | 0.0355661 | -0.0091998 | 0.1306468  |
| fixed    | cond      | NA       | Stabtime:Condition25°CD60%RH | 0.0152769  | 0.0223872 | -0.0282175 | 0.0586618  |
| fixed    | cond      | NA       | Stabtime:Condition30°CD75%RH | 0.0370023  | 0.0222625 | -0.0072406 | 0.0807750  |
| fixed    | cond      | NA       | Stabtime:Condition40°CD75%RH | 0.1450490  | 0.0693874 | 0.0071687  | 0.2820907  |
| fixed    | cond      | NA       | Stabtime:Condition50°C       | -0.0694587 | 0.1450109 | -0.3494239 | 0.2134375  |
| ran_pars | cond      | Batch    | sd__(Intercept)              | 1.3586553  | 0.4000077 | 0.7943066  | 2.3477740  |
| ran_pars | cond      | RunID    | sd__(Intercept)              | 0.9425983  | 0.0859487 | 0.7760307  | 1.1127004  |
| ran_pars | cond      | Residual | sd__Observation              | 1.6890130  | 0.0430681 | 1.6073763  | 1.7764436  |

Priors

The code below obtains the default priors for a given likelihood; if you don't specify priors, brms will use these defaults. Even if you are comfortable with the default priors, the same code is useful for reporting the priors in a statistical report or manuscript.

# Get the default priors:
get_prior(
  Result ~ Stabtime:Condition + (1 | RunID) + (1 | Batch),
  data = dataset,
  family = gaussian()#,
  #link = ?
)
##                    prior     class                         coef group resp dpar
##                   (flat)         b                                             
##                   (flat)         b Stabtime:Condition25°CD60%RH                
##                   (flat)         b Stabtime:Condition30°CD75%RH                
##                   (flat)         b Stabtime:Condition40°CD75%RH                
##                   (flat)         b        Stabtime:Condition5°C                
##                   (flat)         b       Stabtime:Condition50°C                
##  student_t(3, 93.3, 2.5) Intercept                                             
##     student_t(3, 0, 2.5)        sd                                             
##     student_t(3, 0, 2.5)        sd                              Batch          
##     student_t(3, 0, 2.5)        sd                    Intercept Batch          
##     student_t(3, 0, 2.5)        sd                              RunID          
##     student_t(3, 0, 2.5)        sd                    Intercept RunID          
##     student_t(3, 0, 2.5)     sigma                                             
##  nlpar lb ub       source
##                   default
##              (vectorized)
##              (vectorized)
##              (vectorized)
##              (vectorized)
##              (vectorized)
##                   default
##         0         default
##         0    (vectorized)
##         0    (vectorized)
##         0    (vectorized)
##         0    (vectorized)
##         0         default

You can also specify your own priors, starting from the defaults and altering individual prior statements.

# Specify your own priors:
bprior <- c(
  prior_string("normal(0,10)", class = "b"),
  prior(normal(0, 5), class = b, coef = "Stabtime:Condition5°C"),
  prior(normal(90, 10), class = "Intercept"),
  prior_(~cauchy(0, 2), class = ~sd, group = ~RunID)
)

Likelihood

The likelihood is given below:

  • \[ y_{l(ijk)} = \mu + \alpha_{i}+\gamma_{ijk}+\beta_{j}\times t_{ijk}+\epsilon_{l(ijk)}\]
  • family = Gaussian
  • link = identity link

Stan code

# Stan code:
make_stancode(
  Result ~ Stabtime:Condition + (1 | RunID) + (1 | Batch),
  data=dataset,
  family = gaussian()
)
## // generated with brms 2.19.0
## functions {
## }
## data {
##   int<lower=1> N;  // total number of observations
##   vector[N] Y;  // response variable
##   int<lower=1> K;  // number of population-level effects
##   matrix[N, K] X;  // population-level design matrix
##   // data for group-level effects of ID 1
##   int<lower=1> N_1;  // number of grouping levels
##   int<lower=1> M_1;  // number of coefficients per level
##   int<lower=1> J_1[N];  // grouping indicator per observation
##   // group-level predictor values
##   vector[N] Z_1_1;
##   // data for group-level effects of ID 2
##   int<lower=1> N_2;  // number of grouping levels
##   int<lower=1> M_2;  // number of coefficients per level
##   int<lower=1> J_2[N];  // grouping indicator per observation
##   // group-level predictor values
##   vector[N] Z_2_1;
##   int prior_only;  // should the likelihood be ignored?
## }
## transformed data {
##   int Kc = K - 1;
##   matrix[N, Kc] Xc;  // centered version of X without an intercept
##   vector[Kc] means_X;  // column means of X before centering
##   for (i in 2:K) {
##     means_X[i - 1] = mean(X[, i]);
##     Xc[, i - 1] = X[, i] - means_X[i - 1];
##   }
## }
## parameters {
##   vector[Kc] b;  // population-level effects
##   real Intercept;  // temporary intercept for centered predictors
##   real<lower=0> sigma;  // dispersion parameter
##   vector<lower=0>[M_1] sd_1;  // group-level standard deviations
##   vector[N_1] z_1[M_1];  // standardized group-level effects
##   vector<lower=0>[M_2] sd_2;  // group-level standard deviations
##   vector[N_2] z_2[M_2];  // standardized group-level effects
## }
## transformed parameters {
##   vector[N_1] r_1_1;  // actual group-level effects
##   vector[N_2] r_2_1;  // actual group-level effects
##   real lprior = 0;  // prior contributions to the log posterior
##   r_1_1 = (sd_1[1] * (z_1[1]));
##   r_2_1 = (sd_2[1] * (z_2[1]));
##   lprior += student_t_lpdf(Intercept | 3, 93.3, 2.5);
##   lprior += student_t_lpdf(sigma | 3, 0, 2.5)
##     - 1 * student_t_lccdf(0 | 3, 0, 2.5);
##   lprior += student_t_lpdf(sd_1 | 3, 0, 2.5)
##     - 1 * student_t_lccdf(0 | 3, 0, 2.5);
##   lprior += student_t_lpdf(sd_2 | 3, 0, 2.5)
##     - 1 * student_t_lccdf(0 | 3, 0, 2.5);
## }
## model {
##   // likelihood including constants
##   if (!prior_only) {
##     // initialize linear predictor term
##     vector[N] mu = rep_vector(0.0, N);
##     mu += Intercept;
##     for (n in 1:N) {
##       // add more terms to the linear predictor
##       mu[n] += r_1_1[J_1[n]] * Z_1_1[n] + r_2_1[J_2[n]] * Z_2_1[n];
##     }
##     target += normal_id_glm_lpdf(Y | Xc, mu, b, sigma);
##   }
##   // priors including constants
##   target += lprior;
##   target += std_normal_lpdf(z_1[1]);
##   target += std_normal_lpdf(z_2[1]);
## }
## generated quantities {
##   // actual population-level intercept
##   real b_Intercept = Intercept - dot_product(means_X, b);
## }

Run model

Combining the likelihood and the priors leads to the posterior distribution of the parameters of interest. The summaries of these parameters are then derived from the posterior distribution.

Table 9 shows model summary based on the Bayesian approach.

tbls("Tab9", display = "full")
## [1] "Table  9: Model summary based on the Bayesian approach"
# Run model:
#--- Comment out [Shift+Ctrl+C] to avoid a re-run after saving the object
bmod <-  brm(
  Result ~ Stabtime:Condition + (1 | RunID) + (1 | Batch),
  data = dataset,
  family = gaussian(),
  chains = 3,
  iter = 5000,
  warmup = 1000,
  thin = 1,
  prior = bprior,
  save_pars = save_pars(group = TRUE)
)
#----
# save brms object:
# saveRDS(bmod, "Output/bmod.rds")
# read brms object:
#bmod <- readRDS("Output/bmod.rds")
# Print model summaries
tidy(bmod) %>%
  knitr::kable() %>% 
  kable_styling(bootstrap_options = c("striped", "condensed"))
| effect   | component | group    | term                         | estimate   | std.error | conf.low   | conf.high  |
|----------|-----------|----------|------------------------------|------------|-----------|------------|------------|
| fixed    | cond      | NA       | (Intercept)                  | 93.0616493 | 0.5077234 | 92.0405678 | 94.0682127 |
| fixed    | cond      | NA       | Stabtime:Condition5°C        | 0.0626484  | 0.0356293 | -0.0070539 | 0.1323384  |
| fixed    | cond      | NA       | Stabtime:Condition25°CD60%RH | 0.0155990  | 0.0223615 | -0.0288377 | 0.0589625  |
| fixed    | cond      | NA       | Stabtime:Condition30°CD75%RH | 0.0369575  | 0.0221979 | -0.0059719 | 0.0806565  |
| fixed    | cond      | NA       | Stabtime:Condition40°CD75%RH | 0.1454753  | 0.0691554 | 0.0104095  | 0.2816676  |
| fixed    | cond      | NA       | Stabtime:Condition50°C       | -0.0684374 | 0.1460450 | -0.3545383 | 0.2201570  |
| ran_pars | cond      | Batch    | sd__(Intercept)              | 1.3605750  | 0.4139780 | 0.7921396  | 2.3838201  |
| ran_pars | cond      | RunID    | sd__(Intercept)              | 0.9451582  | 0.0885632 | 0.7746837  | 1.1229415  |
| ran_pars | cond      | Residual | sd__Observation              | 1.6887866  | 0.0434060 | 1.6053337  | 1.7762506  |

Convergence diagnostics

Convergence diagnostics are essential before utilizing the results of a Bayesian analysis. Many convergence diagnostics exist; in this study we limit ourselves to a few.

Names of parameters

# names of parameters
#head(parnames(bmod),n=21) #deprecated
head(variables(bmod), n=21)
##  [1] "b_Intercept"                    "b_Stabtime:Condition5°C"       
##  [3] "b_Stabtime:Condition25°CD60%RH" "b_Stabtime:Condition30°CD75%RH"
##  [5] "b_Stabtime:Condition40°CD75%RH" "b_Stabtime:Condition50°C"      
##  [7] "sd_Batch__Intercept"            "sd_RunID__Intercept"           
##  [9] "sigma"                          "r_Batch[1,Intercept]"          
## [11] "r_Batch[2,Intercept]"           "r_Batch[3,Intercept]"          
## [13] "r_Batch[4,Intercept]"           "r_Batch[5,Intercept]"          
## [15] "r_Batch[6,Intercept]"           "r_Batch[7,Intercept]"          
## [17] "r_Batch[8,Intercept]"           "r_Batch[9,Intercept]"          
## [19] "r_RunID[1,Intercept]"           "r_RunID[2,Intercept]"          
## [21] "r_RunID[3,Intercept]"

Plot function

#Trace and Density Plots for MCMC Draws
## All parameters:
plot(bmod, ask=FALSE)

## fixed effects:
#plot(bmod, variable = "b_Intercept")
plot(bmod, variable = "b_Stabtime:Condition25°CD60%RH")

## all fixed effects (regular expression):
plot(bmod, variable = "^b_", regex = TRUE)

## variance components:
plot(bmod, variable = "sigma")

# plot(bmod, variable = "sd_RunID__Intercept")
# plot(bmod, variable = "sd_Batch__Intercept")
## variance components excluding residual variance:
# plot(bmod, variable = "^sd_", regex = TRUE)
## all variance components:
plot(bmod, variable = "^s", regex = TRUE)

Pairs function

pairs(bmod, variable = variables(bmod)[1:3])

# pairs(bmod, variable = "^b_", regex = TRUE)
# pairs(bmod, variable = "^sd_", regex = TRUE)
pairs(bmod, variable = "^s", regex = TRUE)

mcmc_plot function

#Hist, Density, Trace plots
mcmc_plot(bmod, type = "hist") 

#mcmc_plot(bmod,type="dens_overlay")
mcmc_plot(bmod, type = "trace")

#Acf, Rhat, Neff 
mcmc_plot(bmod, type = "acf")#"acf_bar"

mcmc_plot(bmod, type = "rhat")

#mcmc_plot(bmod,type="neff")

CODA package

#Trace and Density Plots for MCMC Draws
bmod.mcmc <- as.mcmc(bmod)
gelman.diag(bmod.mcmc[, 1:10])
## Potential scale reduction factors:
## 
##                                Point est. Upper C.I.
## b_Intercept                             1       1.00
## b_Stabtime:Condition5°C                 1       1.01
## b_Stabtime:Condition25°CD60%RH          1       1.00
## b_Stabtime:Condition30°CD75%RH          1       1.01
## b_Stabtime:Condition40°CD75%RH          1       1.00
## b_Stabtime:Condition50°C                1       1.00
## sd_Batch__Intercept                     1       1.00
## sd_RunID__Intercept                     1       1.00
## sigma                                   1       1.00
## r_Batch[1,Intercept]                    1       1.00
## 
## Multivariate psrf
## 
## 1
gelman.plot(bmod.mcmc[, 1:10])

# geweke.diag(bmod.mcmc[, 1:10])
# geweke.plot(bmod.mcmc[, 1:10])
# autocorr(bmod.mcmc[, 1:10])
# autocorr.diag(bmod.mcmc[, 1:10])
# autocorr.plot(bmod.mcmc[, 1:10])
# crosscorr(bmod.mcmc[, 1:10])
crosscorr.plot(bmod.mcmc[, 1:10])

#coda.menu()

Shiny Stan

# shiny app:
#launch_shinystan(bmod)# not running

Model update

# Update the model:
bmod2 <-  update(
  bmod,
  formula. = ~ . - (1|RunID),
  #newdata  = dataset,
  # family=student(),
  # prior=bprior2, # prior= set_prior("normal(0,5)"),
  chains = 3,
  iter = 20000,
  warmup = 10000,
  thin = 10
)
#save brms object:
saveRDS(bmod2, "output/bmod2.rds")
#read brms object:
bmod2 <- readRDS("output/bmod2.rds")

Model comparison

# WAIC:
bmod.waic  <- waic(bmod)
bmod2.waic <- waic(bmod2)
# compare both models (compare_ic() is deprecated in recent brms releases;
# loo_compare() below is the preferred interface)
compare_ic(bmod.waic, bmod2.waic)
##                 WAIC    SE
## bmod         3675.98 42.66
## bmod2        3811.67 43.01
## bmod - bmod2 -135.69 22.20
# # LOO:
# bmod.loo  <- waic(bmod)
# bmod2.loo <- waic(bmod2)
# # compare both models
# compare_ic(bmod.loo, bmod2.loo)
#loo_compare(x, ..., criterion = c("loo", "waic", "kfold"), model_names = NULL)
# WAIC:
## add waic to the models
bmod.add.waic  <- add_criterion(bmod, "waic")
bmod2.add.waic <- add_criterion(bmod2, "waic")
## compare both models
loo_compare(bmod.add.waic, bmod2.add.waic, criterion = "waic")
##                elpd_diff se_diff
## bmod.add.waic    0.0       0.0  
## bmod2.add.waic -67.8      11.1
# # LOO:
# ## add loo to the models
# bmod.add.loo  <- add_criterion(bmod, "loo")
# bmod2.add.loo <- add_criterion(bmod2, "loo")
# ## compare both models
# loo_compare(bmod.add.loo, bmod2.add.loo, criterion = "loo")
# # KFOLD:
# ## add kfold to the models
# bmod.add.kfold  <- add_criterion(bmod, "kfold")
# bmod2.add.kfold <- add_criterion(bmod2, "kfold")
# ## compare both models
# loo_compare(bmod.add.kfold, bmod2.add.kfold, criterion = "kfold")

Model summaries

Table 10 shows the summary of the final model.

tbls("Tab10", display = "full")
## [1] "Table  10: Summary of the final model"
#Run model:
tidy(bmod) %>%
  knitr::kable() %>% 
  kable_styling(bootstrap_options = c("striped", "condensed"))
| effect   | component | group    | term                         | estimate   | std.error | conf.low   | conf.high  |
|----------|-----------|----------|------------------------------|------------|-----------|------------|------------|
| fixed    | cond      | NA       | (Intercept)                  | 93.0616493 | 0.5077234 | 92.0405678 | 94.0682127 |
| fixed    | cond      | NA       | Stabtime:Condition5°C        | 0.0626484  | 0.0356293 | -0.0070539 | 0.1323384  |
| fixed    | cond      | NA       | Stabtime:Condition25°CD60%RH | 0.0155990  | 0.0223615 | -0.0288377 | 0.0589625  |
| fixed    | cond      | NA       | Stabtime:Condition30°CD75%RH | 0.0369575  | 0.0221979 | -0.0059719 | 0.0806565  |
| fixed    | cond      | NA       | Stabtime:Condition40°CD75%RH | 0.1454753  | 0.0691554 | 0.0104095  | 0.2816676  |
| fixed    | cond      | NA       | Stabtime:Condition50°C       | -0.0684374 | 0.1460450 | -0.3545383 | 0.2201570  |
| ran_pars | cond      | Batch    | sd__(Intercept)              | 1.3605750  | 0.4139780 | 0.7921396  | 2.3838201  |
| ran_pars | cond      | RunID    | sd__(Intercept)              | 0.9451582  | 0.0885632 | 0.7746837  | 1.1229415  |
| ran_pars | cond      | Residual | sd__Observation              | 1.6887866  | 0.0434060 | 1.6053337  | 1.7762506  |

Table 11 shows the summary of variance components.

tbls("Tab11", display = "full")
## [1] "Table  11: Variance Components"
tidy(
  bmod,
  parameters = NA, #parameters = "^s_"
  effects = "ran_pars",
  robust = TRUE, # Default Option: FALSE [mean]
  conf.level = 0.95,
  conf.method = "quantile" #c("quantile", "HPDinterval")
) |>  
  mutate(
    VarianceComponent = group,
    Median = round_any(estimate, 0.01), 
    MAD = round_any(std.error, 0.001),
    LCL = round_any(conf.low, 0.1),
    UCL = round_any(conf.high, 0.1),
    "Median (MAD)" = glue("{format(Median,nsmall=2)} ({format(MAD,nsmall=3)})"),
    "95% CI" = glue("({format(LCL,nsmall=2)} ;{format(UCL,nsmall=2)})")
  ) |>
  dplyr::select(VarianceComponent, `Median (MAD)`, `95% CI`) |> 
  knitr::kable() |> 
  kable_styling(bootstrap_options = c("striped", "condensed"))
| VarianceComponent | Median (MAD) | 95% CI       |
|-------------------|--------------|--------------|
| Batch             | 1.28 (0.350) | (0.80; 2.40) |
| RunID             | 0.94 (0.089) | (0.80; 1.10) |
| Residual          | 1.69 (0.044) | (1.60; 1.80) |

Table 12 shows the summary of the fixed effects.

tbls("Tab12", display = "full")
## [1] "Table  12: Fixed Effects"
tidy(
  bmod,
  parameters = NA, 
  effects="fixed",
  robust     = FALSE, 
  conf.level = 0.95,
  conf.method = "quantile" #c("quantile", "HPDinterval")
) |> 
  mutate(
    Term = term,
    Mean = round_any(estimate,0.1), 
    SD = round_any(std.error,0.01),
    LCL = round_any(conf.low,0.1),
    UCL = round_any(conf.high,0.1),
    "Mean (SD)" = glue("{format(Mean,nsmall=1)} ({format(SD,nsmall=2)})"),
    "95% CI" = glue("({format(LCL,nsmall=1)} ;{format(UCL,nsmall=1)})")
  ) |> 
  dplyr::select(Term,`Mean (SD)`,`95% CI`) |> 
  knitr::kable() |> 
  kable_styling(bootstrap_options = c("striped", "condensed"))
| Term | Mean (SD) | 95% CI |
|---|---|---|
| (Intercept) | 93.1 (0.51) | (92.0 ; 94.1) |
| Stabtime:Condition5°C | 0.1 (0.04) | (0.0 ; 0.1) |
| Stabtime:Condition25°CD60%RH | 0.0 (0.02) | (0.0 ; 0.1) |
| Stabtime:Condition30°CD75%RH | 0.0 (0.02) | (0.0 ; 0.1) |
| Stabtime:Condition40°CD75%RH | 0.1 (0.07) | (0.0 ; 0.3) |
| Stabtime:Condition50°C | -0.1 (0.15) | (-0.4 ; 0.2) |

Table 13 shows the summary of the annual rate of change.

tbls("Tab13", display = "full")
## [1] "Table  13: The Annual Rate of Change"
tidy(
  bmod,
  parameters = NA, 
  effects = "fixed",
  robust = FALSE, 
  conf.level = 0.95,
  conf.method = "quantile" #c("quantile", "HPDinterval")
)[3:4, ] |>  # keep only the 25°C/60%RH and 30°C/75%RH slopes
  mutate(
    Term = term,
    Mean = round_any(estimate * 12, 0.1),  # monthly slope x 12 = annual change
    SD = round_any(std.error * 12, 0.01),
    LCL = round_any(conf.low * 12, 0.1),
    UCL = round_any(conf.high * 12, 0.1),
    "Annual Change (SD)" = glue("{format(Mean,nsmall=1)} ({format(SD,nsmall=2)})"),
    "95% CI" = glue("({format(LCL,nsmall=1)} ;{format(UCL,nsmall=1)})")
  ) |> 
  dplyr::select(Term, `Annual Change (SD)`, `95% CI`) |> 
  knitr::kable() |> 
  kable_styling(bootstrap_options = c("striped", "condensed"))
| Term | Annual Change (SD) | 95% CI |
|---|---|---|
| Stabtime:Condition25°CD60%RH | 0.2 (0.27) | (-0.3 ; 0.7) |
| Stabtime:Condition30°CD75%RH | 0.4 (0.27) | (-0.1 ; 1.0) |

Predictions

# Predictions: Keeps all random effects and residual errors
ResultP <- predict(bmod, newdata = new_dataset)
head(ResultP)
##      Estimate Est.Error     Q2.5    Q97.5
## [1,] 93.28324  1.751280 89.79002 96.66509
## [2,] 93.32082  1.785297 89.79962 96.77740
## [3,] 93.31649  1.780641 89.85731 96.83685
## [4,] 93.32593  1.774365 89.78610 96.80694
## [5,] 93.27931  1.778976 89.74988 96.77281
## [6,] 93.30531  1.780220 89.75325 96.74509
# Fitted values: Drops the residual errors
ResultF <- fitted(bmod, newdata = new_dataset) 
head(ResultF)
##      Estimate Est.Error     Q2.5   Q97.5
## [1,] 93.29066  0.567318 92.19336 94.4017
## [2,] 93.29066  0.567318 92.19336 94.4017
## [3,] 93.29066  0.567318 92.19336 94.4017
## [4,] 93.29066  0.567318 92.19336 94.4017
## [5,] 93.29066  0.567318 92.19336 94.4017
## [6,] 93.29066  0.567318 92.19336 94.4017
# Predictions retaining only the batch random effect: the run-to-run
# (analytical) random effect is dropped, but predict() still adds residual error
ResultPV <- predict(bmod, re_formula = ~ (1 | Batch), newdata = new_dataset) 
head(ResultPV)
##      Estimate Est.Error     Q2.5    Q97.5
## [1,] 92.77727  1.712928 89.40508 96.13225
## [2,] 92.76343  1.715238 89.42200 96.14242
## [3,] 92.75893  1.715519 89.41040 96.12004
## [4,] 92.75458  1.700492 89.48961 96.07890
## [5,] 92.78387  1.723156 89.43468 96.15366
## [6,] 92.76023  1.725143 89.37315 96.14167
# Combine the new data with the posterior predictions
pred_dataset <- cbind(new_dataset, ResultP)
head(pred_dataset)
##   Condition Stabtime Batch Vessel   Result RunID Estimate Est.Error     Q2.5
## 1       5°C        3     1      1 97.43350     1 93.28324  1.751280 89.79002
## 2       5°C        3     1      2 90.97686     1 93.32082  1.785297 89.79962
## 3       5°C        3     1      3 92.10106     1 93.31649  1.780641 89.85731
## 4       5°C        3     1      4 92.82188     1 93.32593  1.774365 89.78610
## 5       5°C        3     1      5 95.06465     1 93.27931  1.778976 89.74988
## 6       5°C        3     1      6 93.09587     1 93.30531  1.780220 89.75325
##      Q97.5
## 1 96.66509
## 2 96.77740
## 3 96.83685
## 4 96.80694
## 5 96.77281
## 6 96.74509
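As an informal sanity check on the outputs above: because `new_dataset` uses Batch and RunID levels seen in the data, `predict()` conditions on those effects, and its `Est.Error` should be roughly the `fitted()` `Est.Error` and the residual SD combined in quadrature (the exact figures below are taken from the first rows of the output above and from Table 10):

```r
# Informal check: predict() SD ~ sqrt(fitted SD^2 + residual SD^2)
# when predicting for observed Batch/RunID levels.
se_fit <- 0.567318   # fitted() Est.Error, first row above
sd_res <- 1.6887866  # posterior mean residual SD (Table 10)
sqrt(se_fit^2 + sd_res^2)  # close to the ~1.75 Est.Error reported by predict()
```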

Figure 2 shows the predicted stability profile of dissolution data by batch and storage condition. The blue dashed line represents the \(Q+5\) value (85% dissolved) and the red dashed line represents the \(Q\) value (80% dissolved).

ggplot(
  pred_dataset,
  aes(x = Stabtime, y = Result, group = Batch, col = Batch)
) + 
  geom_jitter(width = 0.2, alpha = 0.1) + 
  geom_line(aes(y = Estimate)) +
  facet_grid(. ~ Condition) +  
  xlab("Stability time (months)") + 
  ylab("Dissolution value (% dissolved)") +
  geom_hline(yintercept = 85, col = "blue", linetype = "dashed") + 
  geom_hline(yintercept = 80, col = "red", linetype = "dashed") +   
  theme(
    axis.ticks = element_blank(),
    axis.text.x = element_text(angle = 0, hjust = 1, vjust = 0.5)
  )

figs("Fig2", display = "full")
## [1] "Figure  2: Predicted stability profile of dissolution values (% dissolved) by batch and storage condition"

Post-processing: calculating the probability of failure

Table 14 shows the risk of failing dissolution stage testing.

### Results: Probability of Failure
tbls("Tab14", display = "full")
## [1] "Table  14: "
ConStab <- cbind(
  CC = 2:3,
  expand.grid(
    Condition = levels(dataset$Condition)[2:3],
    Stabtime = c(0, 24, 36)
  )
)
J <- dim(ConStab)[1]
obj <- bmod2
# Extract posterior draws as a matrix; posterior_samples() is deprecated in
# recent brms versions in favour of the as_draws_*() family
postsamp <- as_draws_matrix(obj)

Tmp <- NULL
for (j in 1:J) {
  # Posterior draws of the process mean at this condition and stability time:
  # intercept plus time x condition-specific slope
  tmpp <- round(
    100 * disso_prob(
      postsamp[, 1] + ConStab$Stabtime[j] * postsamp[, (ConStab$CC[j] + 1)],
      postsamp[, 8],
      postsamp[, 7],
      postsamp[, 9]
    )[1:3],
    2
  )
  tmp <- data.frame(
    Condition = ConStab$Condition[j],
    Stabtime = ConStab$Stabtime[j],
    PFS1 = tmpp[1],   # fail stage I
    PFS12 = tmpp[2],  # fail stages I and II
    PFS123 = tmpp[3]  # fail stages I, II and III
  )
  Tmp <- rbind(Tmp, tmp)
}
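The `disso_prob()` helper used above is defined elsewhere in the project and is not shown here. Purely as an illustration of the kind of calculation involved, a hypothetical Monte Carlo version applying the USP <711> staged acceptance criteria for immediate-release products (with Q = 80%) might look like:

```r
# Hypothetical Monte Carlo sketch of a disso_prob()-style helper; NOT the
# project's actual function. For each posterior draw of the process mean and
# the batch, run and residual SDs, simulate one staged dissolution test.
disso_prob_sketch <- function(mu, sd_batch, sd_run, sd_res, Q = 80) {
  n <- length(mu)
  fail_s1 <- fail_s12 <- fail_s123 <- logical(n)
  for (i in seq_len(n)) {
    # one new batch and one new analytical run per simulated test
    m <- mu[i] + rnorm(1, 0, sd_batch[i]) + rnorm(1, 0, sd_run[i])
    u <- m + rnorm(24, 0, sd_res[i])  # up to 24 individual tablets
    s1 <- all(u[1:6] >= Q + 5)                           # stage I: each unit >= Q+5
    s2 <- mean(u[1:12]) >= Q && all(u[1:12] >= Q - 15)   # stage II: 12 units
    s3 <- mean(u) >= Q &&                                # stage III: 24 units
      sum(u < Q - 15) <= 2 && all(u >= Q - 25)
    fail_s1[i]   <- !s1
    fail_s12[i]  <- !s1 && !s2
    fail_s123[i] <- !s1 && !s2 && !s3
  }
  c(PFS1 = mean(fail_s1), PFS12 = mean(fail_s12), PFS123 = mean(fail_s123))
}
```

Each posterior draw yields one simulated test, and averaging the stage-wise failure indicators over draws gives the posterior probabilities of failing stage I, stages I and II, and all three stages (the PFS1, PFS12 and PFS123 columns reported above).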

POF <- Tmp |>
  filter(!((Condition == "30°C/75%RH") & (Stabtime == 0))) |>
  mutate(Condition=ifelse(Stabtime == 0, "", as.character(Condition)))
rownames(POF) <- NULL
POF <- as.data.frame(POF)
saveRDS(POF, "output/pof.rds")
POF <- readRDS("output/pof.rds")
POF %>% mutate(
  `Storage condition` = Condition,
  `Stability time` = Stabtime,
  `PF Stage I` = PFS1,
  `PF Stage II` = PFS12,
  `PF Stage III` = PFS123
) %>% 
  dplyr::select(
    `Storage condition`,
    `Stability time`,
    `PF Stage I`, 
    `PF Stage II`,
    `PF Stage III`
  ) %>% 
  knitr::kable() %>% 
  kable_styling(bootstrap_options = c("striped", "condensed"))
| Storage condition | Stability time | PF Stage I | PF Stage II | PF Stage III |
|---|---|---|---|---|
|  | 0 | 0.07 | 0 | 0 |
| 25°C/60%RH | 24 | 0.00 | 0 | 0 |
| 30°C/75%RH | 24 | 0.10 | 0 | 0 |
| 25°C/60%RH | 36 | 0.00 | 0 | 0 |
| 30°C/75%RH | 36 | 0.00 | 0 | 0 |

Conclusion

The objective of this study was a statistical evaluation of the risk of failing dissolution stage testing (Immediate-Release Dosage Forms) at 0, 24 and 36 months for the Active Pharmaceutical Ingredient (API) A in batches of AB FDC FCTs packaged in blisters, at the 25-minute dissolution time point and for storage conditions 25°C/60%RH and 30°C/75%RH. The “specification limit” or Q value is 80%.

The risk of failing dissolution stage testing was computed for storage conditions 25°C/60%RH and 30°C/75%RH at the initial, 24- and 36-month stability time points for the 25-minute dissolution time point. The risk of failing all stages of dissolution stage testing was negligible, implying a negligible risk of batch rejection.

References

Software

  1. Bürkner P (2021). “Bayesian Item Response Modeling in R with brms and Stan.” Journal of Statistical Software, 100(5), 1–54. doi: 10.18637/jss.v100.i05.
  2. Bürkner P (2018). “Advanced Bayesian Multilevel Modeling with the R Package brms.” The R Journal, 10(1), 395–411. doi: 10.32614/RJ-2018-017.
  3. Bürkner P (2017). “brms: An R Package for Bayesian Multilevel Models Using Stan.” Journal of Statistical Software, 80(1), 1–28. doi: 10.18637/jss.v080.i01.
  4. Wickham H, Averick M, Bryan J, Chang W, McGowan LD, François R, Grolemund G, Hayes A, Henry L, Hester J, Kuhn M, Pedersen TL, Miller E, Bache SM, Müller K, Ooms J, Robinson D, Seidel DP, Spinu V, Takahashi K, Vaughan D, Wilke C, Woo K, Yutani H (2019). “Welcome to the tidyverse.” Journal of Open Source Software, 4(43), 1686. doi: 10.21105/joss.01686.
  5. Plummer M, Best N, Cowles K, Vines K (2006). “CODA: Convergence Diagnosis and Output Analysis for MCMC.” R News, 6(1), 7–11. https://journal.r-project.org/archive/.

Appendix I: R Data Listings

Appendix II: R Session Information

## R version 4.3.0 (2023-04-21)
## Platform: aarch64-apple-darwin20 (64-bit)
## Running under: macOS Ventura 13.4
## 
## Matrix products: default
## BLAS:   /Library/Frameworks/R.framework/Versions/4.3-arm64/Resources/lib/libRblas.0.dylib 
## LAPACK: /Library/Frameworks/R.framework/Versions/4.3-arm64/Resources/lib/libRlapack.dylib;  LAPACK version 3.11.0
## 
## locale:
## [1] en_US.UTF-8/en_US.UTF-8/en_US.UTF-8/C/en_US.UTF-8/en_US.UTF-8
## 
## time zone: America/Indiana/Indianapolis
## tzcode source: internal
## 
## attached base packages:
## [1] stats     graphics  grDevices utils     datasets  methods   base     
## 
## other attached packages:
##  [1] broom.mixed_0.2.9.4  broom_1.0.4          tidybayes_3.0.4     
##  [4] superdiag_2.0        ggmcmc_1.5.1.1       bayesplot_1.10.0    
##  [7] mcmcplots_0.4.3      shinystan_2.6.0      shiny_1.7.4         
## [10] shinythemes_1.2.0    R2jags_0.7-1         rjags_4-14          
## [13] coda_0.19-4          brms_2.19.0          Rcpp_1.0.10         
## [16] rstan_2.21.8         StanHeaders_2.21.0-7 lmerTest_3.1-3      
## [19] lme4_1.1-33          Matrix_1.5-4         glue_1.6.2          
## [22] kableExtra_1.3.4     DT_0.27              readxl_1.4.2        
## [25] captioner_2.2.3      plyr_1.8.8           lubridate_1.9.2     
## [28] forcats_1.0.0        stringr_1.5.0        dplyr_1.1.2         
## [31] purrr_1.0.1          readr_2.1.4          tidyr_1.3.0         
## [34] tibble_3.2.1         ggplot2_3.4.2        tidyverse_2.0.0     
## 
## loaded via a namespace (and not attached):
##   [1] RColorBrewer_1.1-3   R2WinBUGS_2.1-21     tensorA_0.36.2      
##   [4] rstudioapi_0.14      jsonlite_1.8.4       magrittr_2.0.3      
##   [7] estimability_1.4.1   farver_2.1.1         nloptr_2.0.3        
##  [10] rmarkdown_2.21       fs_1.6.2             vctrs_0.6.2         
##  [13] minqa_1.2.5          denstrip_1.5.4       base64enc_0.1-3     
##  [16] webshot_0.5.4        htmltools_0.5.5      distributional_0.3.2
##  [19] cellranger_1.1.0     parallelly_1.35.0    sass_0.4.6          
##  [22] bslib_0.4.2          htmlwidgets_1.6.2    emmeans_1.8.6       
##  [25] zoo_1.8-12           cachem_1.0.8         sfsmisc_1.1-15      
##  [28] igraph_1.4.2         mime_0.12            lifecycle_1.0.3     
##  [31] pkgconfig_2.0.3      colourpicker_1.2.0   R6_2.5.1            
##  [34] fastmap_1.1.1        future_1.32.0        digest_0.6.31       
##  [37] numDeriv_2016.8-1.1  reshape_0.8.9        GGally_2.1.2        
##  [40] colorspace_2.1-0     furrr_0.3.1          ps_1.7.5            
##  [43] crosstalk_1.2.0      labeling_0.4.2       fansi_1.0.4         
##  [46] timechange_0.2.0     httr_1.4.6           abind_1.4-5         
##  [49] compiler_4.3.0       withr_2.5.0          backports_1.4.1     
##  [52] inline_0.3.19        highr_0.10           pkgbuild_1.4.0      
##  [55] MASS_7.3-58.4        gtools_3.9.4         loo_2.6.0           
##  [58] tools_4.3.0          httpuv_1.6.11        threejs_0.3.3       
##  [61] callr_3.7.3          nlme_3.1-162         promises_1.2.0.1    
##  [64] grid_4.3.0           checkmate_2.2.0      reshape2_1.4.4      
##  [67] generics_0.1.3       gtable_0.3.3         tzdb_0.4.0          
##  [70] hms_1.1.3            xml2_1.3.4           utf8_1.2.3          
##  [73] ggdist_3.3.0         pillar_1.9.0         markdown_1.7        
##  [76] posterior_1.4.1      later_1.3.1          splines_4.3.0       
##  [79] lattice_0.21-8       tidyselect_1.2.0     miniUI_0.1.1.1      
##  [82] knitr_1.42           arrayhelpers_1.1-0   gridExtra_2.3       
##  [85] svglite_2.1.1        stats4_4.3.0         xfun_0.39           
##  [88] bridgesampling_1.1-2 matrixStats_0.63.0   stringi_1.7.12      
##  [91] yaml_2.3.7           boot_1.3-28.1        evaluate_0.21       
##  [94] codetools_0.2-19     cli_3.6.1            RcppParallel_5.1.7  
##  [97] xtable_1.8-4         systemfonts_1.0.4    munsell_0.5.0       
## [100] processx_3.8.1       jquerylib_0.1.4      globals_0.16.2      
## [103] svUnit_1.0.6         parallel_4.3.0       rstantools_2.3.1    
## [106] ellipsis_0.3.2       prettyunits_1.1.1    dygraphs_1.1.1.6    
## [109] Brobdingnag_1.2-9    listenv_0.9.0        viridisLite_0.4.2   
## [112] mvtnorm_1.1-3        scales_1.2.1         xts_0.13.1          
## [115] crayon_1.5.2         rlang_1.1.1          rvest_1.0.3         
## [118] shinyjs_2.1.0